Reviews: Modeling Uncertainty by Learning a Hierarchy of Deep Neural Connections

Neural Information Processing Systems

Here are my comments for the paper:

- The B2N, RAI, and GGT abbreviations are never defined in the paper; they have only been cited from previous works (minor). A short background section on these methods could also spell out their full names.
- As far as I understand, the proposed method is B2N with B-RAI in place of the RAI originally proposed in [25]. This allows the model to sample multiple generative and discriminative structures and, as a result, to create an ensemble of networks with possibly different structures and parameters.
- The paper might be better organized with a background section on B-RAI and B2N, followed by a separate section on BRAINet in which the distinction from prior work and the contribution are clearly stated.


Reviews: Neural Arithmetic Logic Units

Neural Information Processing Systems

The paper proposes a method for extrapolating to numbers outside the training range in neural networks. It introduces two models to solve this task: the neural accumulator (NAC), which constrains a linear layer toward performing additions or subtractions, and the neural arithmetic logic unit (NALU), which can learn more complex mathematical functions. The tackled problem of extrapolation is an interesting one, as neural networks generally do not perform well at such tasks. The paper itself is well motivated in the introduction, and the example comparing different existing architectures against each other adds clarity as to why this task should be studied.
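To make the mechanism the review refers to concrete, here is a minimal NumPy sketch of the neural accumulator's forward pass as I understand it from the paper: the effective weights are `tanh(W_hat) * sigmoid(M_hat)`, which saturate toward {-1, 0, 1}. The parameter names follow the paper's notation; the toy values are my own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nac_forward(x, W_hat, M_hat):
    """Neural accumulator (NAC): the effective weight matrix
    tanh(W_hat) * sigmoid(M_hat) saturates toward {-1, 0, 1},
    biasing the layer toward pure addition/subtraction."""
    W = np.tanh(W_hat) * sigmoid(M_hat)
    return x @ W

# Toy example: with both parameter matrices saturated positive,
# the effective weights approach 1 and the layer computes a sum.
x = np.array([[2.0, 3.0]])
W_hat = np.full((2, 1), 10.0)
M_hat = np.full((2, 1), 10.0)
print(nac_forward(x, W_hat, M_hat))  # close to [[5.]]
```

The NALU then gates between this additive path and a multiplicative path obtained by applying a NAC in log space, which is what lets it represent products and powers as well.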